Model Inversion Attacks Against Graph Neural Networks

Authors

Abstract

Many data mining tasks rely on graphs to model relational structures among individuals (nodes). Since relational data are often sensitive, there is an urgent need to evaluate the privacy risks in graph data. One famous privacy attack against data analysis models is the model inversion attack, which aims to infer sensitive data in the training dataset and has led to great privacy concerns. Despite its success in grid-like domains, directly applying model inversion attacks to non-grid domains such as graphs yields poor attack performance. This is mainly due to the failure to consider the unique properties of graphs. To bridge this gap, this paper conducts a systematic study of model inversion attacks against Graph Neural Networks (GNNs), one of the state-of-the-art graph analysis tools. First, in the white-box setting where the attacker has full access to the target GNN model, we present GraphMI to infer the private training graph. Specifically, in GraphMI, a projected gradient module is proposed to tackle the discreteness of graph edges and preserve the sparsity and smoothness of graph features; a graph auto-encoder module is used to efficiently exploit graph topology, node attributes, and model parameters for edge inference; and a random sampling module finally samples discrete edges. Furthermore, in the hard-label black-box setting where the attacker can only query the model API and receive classification results, we propose two methods based on gradient estimation and reinforcement learning (RL-GraphMI). With these methods, we study the connection between model inversion risk and edge influence, and show that edges with greater influence are more likely to be recovered. Extensive experiments over several public datasets demonstrate the effectiveness of our methods. We also evaluate our attacks under defenses, including well-designed differentially private training and graph preprocessing. Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against such attacks.
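
As a concrete illustration of the white-box pipeline described above, the following is a minimal sketch of a projected-gradient edge-reconstruction loop in the spirit of GraphMI. The target model interface (target_model(x, a)), the regularization weights, and the optimization hyperparameters are illustrative assumptions rather than the paper's reference implementation, and the graph auto-encoder component is omitted for brevity.

# Hedged sketch of a white-box, GraphMI-style edge reconstruction loop.
# The target_model interface, loss weights, and hyperparameters below are
# illustrative assumptions, not the authors' reference implementation.
import torch
import torch.nn.functional as F

def project_to_unit_interval(a):
    # Projection step: keep relaxed edge weights in [0, 1] and
    # symmetrize for an undirected graph.
    a = torch.clamp(a, 0.0, 1.0)
    return (a + a.t()) / 2

def feature_smoothness(a, x):
    # Encourage connected nodes to have similar attributes via the
    # graph Laplacian quadratic form tr(X^T L X).
    deg = torch.diag(a.sum(dim=1))
    lap = deg - a
    return torch.trace(x.t() @ lap @ x)

def reconstruct_adjacency(target_model, x, y, steps=200, lr=0.1,
                          sparsity_w=1e-3, smooth_w=1e-4):
    n = x.shape[0]
    # Relaxed (continuous) adjacency, initialized near an empty graph.
    a = torch.full((n, n), 0.01, requires_grad=True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = target_model(x, a)                # white-box forward pass
        loss = F.cross_entropy(logits, y)          # match the known labels
        loss = loss + sparsity_w * a.abs().sum()   # sparsity prior on edges
        loss = loss + smooth_w * feature_smoothness(a, x)
        loss.backward()
        opt.step()
        with torch.no_grad():
            a.copy_(project_to_unit_interval(a))   # projected gradient step
    # Random sampling: treat each relaxed weight as a Bernoulli
    # probability to obtain a discrete edge set.
    return torch.bernoulli(a.detach())

The clamp-and-symmetrize call plays the role of the projection in projected gradient descent, keeping the relaxed adjacency a valid edge-probability matrix so that the final Bernoulli sampling step yields discrete edges.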


Similar Articles

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they are shown to be vulnerable to adversarial example attack, which generates malicious output by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use s...


Inversion of a velocity model using artificial neural networks

We present a velocity model inversion approach using artificial neural networks (NN). We selected four aftershocks from the 2000 Tottori, Japan, earthquake located around station SMNH01 in order to determine a 1D nearby underground velocity model. An NN was trained independently for each earthquake-station profile. We generated many velocity models and computed their corresponding synthetic wav...


Timing Attacks against the Syndrome Inversion in Code-Based Cryptosystems

In this work we present new timing vulnerabilities that arise in the inversion of the error syndrome through the Extended Euclidean Algorithm that is part of the decryption operation of code-based Cryptosystems. We analyze three types of timing attack vulnerabilities theoretically and experimentally: The first allows recovery of the zero-element of the secret support, the second is a refinement...


Iterative Neural Network Model Inversion

Recently, model-based techniques have become widespread in solving measurement, control, identification, and related problems. For measurement data evaluation and for controller design, the so-called inverse models are also of considerable interest. In this paper a technique to perform neural network inversion is introduced. For discrete time inputs the proposed method provides good performance if the ...


Clipping Free Attacks against Neural Networks

During the last years, a remarkable breakthrough has been made in the AI domain thanks to deep artificial neural networks, which have achieved great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fa...



Journal

Journal: IEEE Transactions on Knowledge and Data Engineering

Year: 2022

ISSN: 1558-2191, 1041-4347, 2326-3865

DOI: https://doi.org/10.1109/tkde.2022.3207915